
    Faster Replacement Paths

    The replacement paths problem for directed graphs is to find, for given nodes s and t and every edge e on the shortest path between them, the shortest path from s to t that avoids e. For unweighted directed graphs on n vertices, the best known running time was \tilde{O}(n^{2.5}), due to Roditty and Zwick. For graphs with integer weights in {-M,...,M}, Weimann and Yuster recently showed that one can use fast matrix multiplication and solve the problem in O(Mn^{2.584}) time, a runtime which would be O(Mn^{2.33}) if the exponent \omega of matrix multiplication is 2. We improve both of these algorithms. Our new algorithm also relies on fast matrix multiplication and runs in O(M n^{\omega} polylog(n)) time if \omega > 2, and in O(n^{2+\epsilon}) time for any \epsilon > 0 if \omega = 2. Our result shows that, at least for small integer weights, the replacement paths problem in directed graphs may be easier than the related all pairs shortest paths problem in directed graphs, as the current best runtime for the latter is \Omega(n^{2.5}) even if \omega = 2.
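    To make the problem statement concrete, here is a minimal brute-force sketch, not the algorithm of this paper: for an unweighted digraph it simply reruns a BFS after deleting each edge of one s-t shortest path, which already takes about O(nm) time and is the kind of baseline the bounds above improve on. The function names and the toy graph are illustrative assumptions.

```python
from collections import deque

def bfs_path(adj, s, t):
    """Return one shortest s->t node path in an unweighted digraph, or None."""
    parent = {s: None}
    q = deque([s])
    while q:
        u = q.popleft()
        if u == t:
            path = []
            while u is not None:
                path.append(u)
                u = parent[u]
            return path[::-1]
        for v in adj[u]:
            if v not in parent:
                parent[v] = u
                q.append(v)
    return None

def replacement_paths_naive(adj, s, t):
    """For each edge e on one shortest s->t path, return a shortest s->t path
    avoiding e.  Brute force: one BFS per deleted edge, about O(n*m) overall."""
    base = bfs_path(adj, s, t)
    if base is None:
        return {}
    answers = {}
    for u, v in zip(base, base[1:]):
        adj[u].remove(v)                       # temporarily delete edge e = (u, v)
        answers[(u, v)] = bfs_path(adj, s, t)  # shortest s->t path that avoids e
        adj[u].append(v)                       # restore the edge
    return answers

# Tiny example: deleting edge (0, 1) forces the detour 0 -> 2 -> 3.
adj = {0: [1, 2], 1: [3], 2: [3], 3: []}
print(replacement_paths_naive(adj, 0, 3))
```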

    OV Graphs Are (Probably) Hard Instances

    © Josh Alman and Virginia Vassilevska Williams. A graph G on n nodes is an Orthogonal Vectors (OV) graph of dimension d if there are vectors v_1, ..., v_n ∈ {0,1}^d such that nodes i and j are adjacent in G if and only if ⟨v_i, v_j⟩ = 0 over Z. In this paper, we study a number of basic graph algorithm problems, except that one is given as input the vectors defining an OV graph rather than a general graph. We show that for each of the following problems, an algorithm solving it faster on such OV graphs G of dimension only d = O(log n) than in the general case would refute a plausible conjecture about the time required to solve sparse MAX-k-SAT instances: determining whether G contains a triangle, and more generally whether G contains a directed k-cycle for any k ≥ 3; computing the square of the adjacency matrix of G over Z or F_2; and maintaining the shortest distance between two fixed nodes of G, or whether G has a perfect matching, when G is a dynamically updating OV graph. We also prove some complementary results about OV graphs. We show that any problem which is NP-hard on constant-degree graphs is also NP-hard on OV graphs of dimension O(log n), and we give two problems which can be solved faster on OV graphs than in general: Maximum Clique and Online Matrix-Vector Multiplication.
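    As a concrete illustration of the definition (not of the paper's reductions), the sketch below builds the adjacency matrix of an OV graph directly from its defining 0/1 vectors: nodes i and j are adjacent exactly when their inner product over Z is zero. The function name and the toy vectors are assumptions made up for this example; the paper's hard instances additionally have dimension d = O(log n).

```python
import numpy as np

def ov_graph_adjacency(vectors):
    """Rows of `vectors` are n vectors in {0,1}^d.  Returns the n x n 0/1
    adjacency matrix of the OV graph: i ~ j iff <v_i, v_j> = 0 over Z."""
    V = np.asarray(vectors, dtype=np.int64)
    inner = V @ V.T                        # all pairwise inner products over Z
    adj = (inner == 0).astype(np.int8)     # adjacent exactly when the product is 0
    np.fill_diagonal(adj, 0)               # drop self-loops for convenience
    return adj

# Toy instance with n = 4, d = 3: only nodes 0 and 2 are adjacent,
# since <(1,1,0), (0,0,1)> = 0 and every other pair shares a 1-coordinate.
vectors = [[1, 1, 0],
           [0, 1, 1],
           [0, 0, 1],
           [1, 0, 1]]
print(ov_graph_adjacency(vectors))
```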

    Hardness of Easy Problems: Basing Hardness on Popular Conjectures such as the Strong Exponential Time Hypothesis (Invited Talk)

    Algorithmic research strives to develop fast algorithms for fundamental problems. Despite its many successes, however, many problems still do not have very efficient algorithms. For years researchers have explained the hardness of key problems by proving NP-hardness, using polynomial time reductions to base the hardness of these problems on the famous conjecture P != NP. For problems that already have polynomial time algorithms, however, it does not seem that one can show any sort of hardness based on P != NP. Nevertheless, we would like to provide evidence that a problem A with a running time of O(n^k) that has not been improved in decades also requires n^{k-o(1)} time, thus explaining the lack of progress on the problem. Such unconditional time lower bounds seem very difficult to obtain, unfortunately. Recent work has concentrated on an approach mimicking NP-hardness: (1) select a few key problems that are conjectured to require T(n) time to solve, and (2) use special, fine-grained reductions to prove time lower bounds for many diverse problems in P based on the conjectured hardness of the key problems. In this abstract we outline the approach, give some examples of hardness results based on the Strong Exponential Time Hypothesis, and present an overview of some of the recent work on the topic.

    Fine-grained Algorithms and Complexity

    A central goal of algorithmic research is to determine how fast computational problems can be solved in the worst case. Theorems from complexity theory state that there are problems that, on inputs of size n, can be solved in t(n) time but not in O(t(n)^{1-epsilon}) time for any epsilon > 0. The main challenge is to determine where in this hierarchy various natural and important problems lie. Throughout the years, many ingenious algorithmic techniques have been developed and applied to obtain blazingly fast algorithms for many problems. Nevertheless, for many other central problems, the best known running times are essentially those of their classical algorithms from the 1950s and 1960s. Unconditional lower bounds seem very difficult to obtain, and so practically all known time lower bounds are conditional. For years, the main tool for proving hardness of computational problems has been the NP-hardness reduction, basing hardness on P != NP. However, when we care about the exact running time (as opposed to merely polynomial vs. non-polynomial), NP-hardness is not applicable, especially if the problem is already solvable in polynomial time. In recent years, a new theory has been developed, based on "fine-grained reductions" that focus on exact running times. Mimicking NP-hardness, the approach is to (1) select a key problem X that is conjectured to require essentially t(n) time for some t, and (2) reduce X in a fine-grained way to many important problems. This approach has led to the discovery of many meaningful relationships between problems, and even sometimes to equivalence classes. The main key problems used to base hardness on have been the 3SUM problem, the CNF-SAT problem (based on the Strong Exponential Time Hypothesis (SETH)), and the All Pairs Shortest Paths problem. Research on SETH-based lower bounds has flourished in particular in recent years, showing that the classical algorithms are optimal for problems such as Approximate Diameter, Edit Distance, Fréchet Distance, and Longest Common Subsequence. In this talk I will give an overview of the current progress in this area of study, and will highlight some exciting new developments and their relationship to database theory.
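    To ground one of the problems mentioned above, the sketch below is the textbook quadratic-time dynamic program for Longest Common Subsequence; the SETH-based lower bounds referred to in this abstract show that its n^2 behavior cannot be improved to O(n^{2-epsilon}) time for any epsilon > 0 unless SETH fails. The snippet is a standard illustration, not material from the talk.

```python
def lcs_length(a, b):
    """Classical O(len(a) * len(b)) dynamic program for Longest Common
    Subsequence; dp[i][j] is the LCS length of a[:i] and b[:j]."""
    n, m = len(a), len(b)
    dp = [[0] * (m + 1) for _ in range(n + 1)]
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            if a[i - 1] == b[j - 1]:
                dp[i][j] = dp[i - 1][j - 1] + 1              # extend a common subsequence
            else:
                dp[i][j] = max(dp[i - 1][j], dp[i][j - 1])   # skip one character
    return dp[n][m]

print(lcs_length("fine-grained", "fine-tuned"))  # -> 8 (e.g. "fine-ned")
```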